TinyGAN: Distilling BigGAN for Conditional Image Generation

Authors

Abstract

Generative Adversarial Networks (GANs) have become a powerful approach to generative image modeling. However, GANs are notorious for their training instability, especially on large-scale, complex datasets. While the recent work on BigGAN has significantly improved generation quality on ImageNet, it requires a huge model, making it hard to deploy on resource-constrained devices. To reduce the model size, we propose a black-box knowledge distillation framework for compressing GANs, which highlights a stable and efficient training process. Given BigGAN as the teacher network, we manage to train a much smaller student network that mimics its functionality, achieving competitive performance on Inception and FID scores with a generator having 16× fewer parameters. (The source code and trained models are publicly available at https://github.com/terarachang/ACCV_TinyGAN.)
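The black-box setting described above means the student never sees the teacher's weights, only its input-output behavior: given the same latent code and class label, the student is trained to reproduce the teacher's image. A minimal sketch of that pixel-level distillation objective is below; the toy `teacher_generator` and `student_generator` functions and the single weight matrix `w` are hypothetical stand-ins for illustration, not the paper's actual architectures.

```python
import numpy as np

rng = np.random.default_rng(0)

def teacher_generator(z, y):
    # Stand-in for the black-box teacher (e.g. BigGAN): maps a latent
    # vector z and class label y to an "image" (here an 8x8 array).
    return np.tanh(np.outer(z, np.ones(8)) + y)

def student_generator(z, y, w):
    # Tiny student: a single learned linear map over z, far fewer
    # parameters than a real teacher would have.
    return np.tanh(np.outer(z @ w, np.ones(8)) + y)

def pixel_distill_loss(z, y, w):
    # L1 pixel-level distillation loss: the student is penalized for
    # deviating from the teacher's output on the same (z, y) input.
    return np.abs(teacher_generator(z, y) - student_generator(z, y, w)).mean()

# Illustration: the loss vanishes when the student exactly mimics the
# teacher (w = identity) and is positive otherwise (w = zeros).
z = rng.standard_normal(8)
y = 1.0
loss_bad = pixel_distill_loss(z, y, np.zeros((8, 8)))
loss_good = pixel_distill_loss(z, y, np.eye(8))
assert loss_good == 0.0 and loss_bad > 0.0
```

In the full framework this pixel-level term would be combined with adversarial and feature-level losses during training; only the black-box query structure is shown here.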


Similar Resources

Semi-supervised FusedGAN for Conditional Image Generation

We present FusedGAN, a deep network for conditional image synthesis with controllable sampling of diverse images. Fidelity, diversity and controllable sampling are the main quality measures of a good image generation model. Most existing models are insufficient in all three aspects. The FusedGAN can perform controllable sampling of diverse images with very high fidelity. We argue that controlla...


Conditional Image Generation with PixelCNN Decoders

This work explores conditional image generation with a new image density model based on the PixelCNN architecture. The model can be conditioned on any vector, including descriptive labels or tags, or latent embeddings created by other networks. When conditioned on class labels from the ImageNet database, the model is able to generate diverse, realistic scenes representing distinct animals, obje...


Attribute2Image: Conditional Image Generation from Visual Attributes

This paper investigates a novel problem of generating images from visual attributes. We model the image as a composite of foreground and background and develop a layered generative model with disentangled latent variables that can be learned end-to-end using a variational auto-encoder. We experiment with natural images of faces and birds and demonstrate that the proposed models are capable of g...


Conditional CycleGAN for Attribute Guided Face Image Generation

State-of-the-art techniques in Generative Adversarial Networks (GANs) such as CycleGAN are able to learn the mapping from one image domain X to another image domain Y using unpaired image data. We extend CycleGAN to Conditional CycleGAN such that the mapping from X to Y is subjected to an attribute condition Z. Using face image generation as an application example, where X is a low resolution fac...


Image Caption Generation with Text-Conditional Semantic Attention

Attention mechanisms have attracted considerable interest in image captioning due to their powerful performance. However, existing methods use only visual content as attention, and whether textual context can improve attention in image captioning remains unsolved. To explore this problem, we propose a novel attention mechanism, called text-conditional attention, which allows the caption generator t...



Journal

Journal title: Lecture Notes in Computer Science

سال: 2021

ISSN: 1611-3349, 0302-9743

DOI: https://doi.org/10.1007/978-3-030-69538-5_31